Notes on Using Control Variates for Estimation with Reversible MCMC Samplers
Authors
Abstract
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The optimal coefficients of the linear combination of control variates are computed, and adaptive, consistent MCMC estimators for these optimal coefficients are derived. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
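As a concrete illustration of the idea described in the abstract, the sketch below corrects an ordinary ergodic average with a single control variate whose coefficient is estimated from the same chain. It is a generic control-variate sketch under the stated assumptions, not the paper's specific construction for reversible samplers; the AR(1) chain, the function names, and the choice U = G - PG are illustrative only.

```python
import numpy as np

def cv_estimate(f_vals, u_vals):
    """Control-variate estimate of E[F] from MCMC output.

    f_vals : values F(X_t) along the chain
    u_vals : values U(X_t) of a control variate with E[U] = 0 under the
             target (how U is constructed is problem-specific).
    """
    f_vals = np.asarray(f_vals, dtype=float)
    u_vals = np.asarray(u_vals, dtype=float)
    # Estimated optimal coefficient theta* = Cov(F, U) / Var(U),
    # computed adaptively from the same chain output.
    theta_hat = np.cov(f_vals, u_vals, ddof=0)[0, 1] / np.var(u_vals)
    # Modified ergodic average with the control variate subtracted.
    return np.mean(f_vals - theta_hat * u_vals), theta_hat

# Toy illustration (hypothetical): a reversible AR(1) chain with stationary
# distribution N(0, 1).  Taking G(x) = x, the one-step conditional
# expectation is (PG)(x) = rho * x, so U(x) = G(x) - (PG)(x) = (1 - rho) * x
# has mean zero under the target and serves as a control variate for F(x) = x.
rng = np.random.default_rng(0)
rho, n = 0.9, 50_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = rho * x[t] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
estimate, theta_hat = cv_estimate(x, (1.0 - rho) * x)
print(estimate, theta_hat)
```

In this degenerate toy case U is proportional to F, so the variance is removed entirely; in realistic problems the control variate only captures part of the variability and the reduction is partial.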
Similar references
Markov Interacting Importance Samplers (25 Jun 2015)
We introduce a new Markov chain Monte Carlo (MCMC) sampler called the Markov Interacting Importance Sampler (MIIS). The MIIS sampler uses conditional importance sampling (IS) approximations to jointly sample the current state of the Markov chain and estimate conditional expectations, possibly incorporating a full range of variance reduction techniques. We compute Rao-Blackwellized estimates ...
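The conditional importance-sampling and Rao-Blackwellization ideas mentioned above can be sketched roughly as follows. This is a generic building block, not the MIIS algorithm itself; the function names, the proposal interface, and the particle count are assumptions.

```python
import numpy as np

def conditional_is_step(rng, x_curr, log_target, propose, log_proposal, h, m=200):
    # Draw m particles from a proposal conditioned on the current state.
    particles = np.array([propose(rng, x_curr) for _ in range(m)])
    # Self-normalized importance weights.
    log_w = np.array([log_target(p) - log_proposal(p, x_curr) for p in particles])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Rao-Blackwellized estimate of E[h(X)]: a weighted average over all
    # particles rather than an evaluation at a single selected state.
    h_est = np.sum(w * np.array([h(p) for p in particles]))
    # Select the next state of the chain among the particles.
    x_next = particles[rng.choice(m, p=w)]
    return x_next, h_est
```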
Detection and estimation of signals by reversible jump Markov chain Monte Carlo computations
Markov chain Monte Carlo (MCMC) samplers have been a very powerful methodology for estimating signal parameters. With the introduction of the reversible jump MCMC sampler, which is a Metropolis-Hastings method adapted to general state spaces, the potential of MCMC methods has risen to a new level. Consequently, MCMC methods currently play a major role in many research activities. In this ...
Static-parameter estimation in piecewise deterministic processes using particle Gibbs samplers
We develop particle Gibbs samplers for static-parameter estimation in discretely observed piecewise deterministic processes (PDPs). PDPs are stochastic processes that jump randomly at a countable number of stopping times but otherwise evolve deterministically in continuous time. A sequential Monte Carlo (SMC) sampler for filtering in PDPs has recently been proposed. We first provide new insight into...
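To make the definition of a piecewise deterministic process concrete, the following is a toy simulation of the process class only (not of the particle Gibbs sampler); the exponential decay flow, the jump distribution, and all parameter names are invented for illustration.

```python
import numpy as np

def simulate_toy_pdp(rng, x0=1.0, decay=0.5, jump_rate=1.0, t_end=10.0):
    """Toy PDP: between jumps the state follows dx/dt = -decay * x
    deterministically; jumps occur at exponentially spaced times and
    multiply the state by a random factor."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        dt = rng.exponential(1.0 / jump_rate)   # waiting time to the next jump
        if t + dt > t_end:
            break
        t += dt
        x *= np.exp(-decay * dt)                # deterministic flow between jumps
        x *= rng.uniform(0.5, 1.5)              # random jump
        path.append((t, x))
    return path

path = simulate_toy_pdp(np.random.default_rng(1))
```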
Implementing componentwise Hastings algorithms
Markov chain Monte Carlo (MCMC) routines have revolutionized the application of Monte Carlo methods in statistical applications and statistical computing methodology. The Hastings sampler, encompassing both the Gibbs and Metropolis samplers as special cases, is the most commonly applied MCMC algorithm. The performance of the Hastings sampler relies heavily on the choice of sweep strategy, that is, ...
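The notion of a sweep strategy can be illustrated with a minimal componentwise Metropolis-Hastings update. This is a generic sketch under standard assumptions, not the specific recommendations of the paper; log_target, the step size, and the function name are illustrative.

```python
import numpy as np

def componentwise_mh_sweep(rng, x, log_target, step=0.5, random_scan=False):
    """One sweep: each coordinate is updated in turn with a symmetric
    random-walk proposal.  random_scan=True visits coordinates in random
    order, deterministic order otherwise -- one aspect of the 'sweep
    strategy' referred to above."""
    x = np.array(x, dtype=float)
    order = rng.permutation(len(x)) if random_scan else range(len(x))
    for i in order:
        prop = x.copy()
        prop[i] += step * rng.standard_normal()
        # Symmetric proposal: accept with probability min(1, pi(prop) / pi(x)).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
    return x
```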
Control Variates for Stochastic Gradient MCMC
It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly with dataset size. We compare the performance of two classes of methods which aim to solve this issue: stochastic gradient MCMC (SGMCMC) and divide-and-conquer methods. We find an SGMCMC method, stochastic gradient Langevin dynamics (SGLD), to be the most robust in these comparisons. This method makes use of a noisy esti...
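The control-variate idea for stochastic gradients can be sketched as follows. This is an outline of the common fixed-reference-point construction under the assumption that grad_i(theta, datum) returns a single data point's log-likelihood gradient; it is not presented as the paper's exact algorithm, and all names are illustrative.

```python
import numpy as np

def cv_stochastic_gradient(theta, theta_ref, full_grad_ref, grad_i, data, batch_idx):
    """Unbiased minibatch gradient with a control variate: the full-data
    gradient at a fixed reference point theta_ref (precomputed once) is
    corrected by the minibatch difference of per-datum gradients.  The
    variance is small when theta stays close to theta_ref."""
    n, b = len(data), len(batch_idx)
    correction = sum(grad_i(theta, data[i]) - grad_i(theta_ref, data[i])
                     for i in batch_idx)
    return full_grad_ref + (n / b) * np.asarray(correction, dtype=float)

def sgld_step(rng, theta, grad_log_post, eps):
    """One stochastic gradient Langevin dynamics update with step size eps,
    using a (possibly control-variate) estimate of the log-posterior gradient."""
    noise = rng.standard_normal(np.shape(theta))
    return theta + 0.5 * eps * grad_log_post + np.sqrt(eps) * noise
```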